Temporal concatenation for Markov decision processes
Authors
Abstract
We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDP), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a "black box" architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances in which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential for substantial speed-up at the expense of some performance degradation.
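To make the heuristic concrete, below is a minimal Python sketch, assuming a stationary tabular MDP solved by backward induction; the zero terminal value used for each sub-problem and all function names are illustrative assumptions, not the paper's exact construction.

```python
# A minimal sketch (not the authors' implementation) of temporal concatenation
# for a finite-horizon tabular MDP.
# Assumptions: stationary dynamics P[a][s, s'] and rewards R[s, a], horizon H,
# and a zero terminal value for every sub-problem.
import numpy as np

def backward_induction(P, R, horizon, terminal_value):
    """Solve one finite-horizon segment exactly; return a per-step policy."""
    n_states, n_actions = R.shape
    V = terminal_value.copy()
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        # Q[s, a] = R[s, a] + E[V(s') | s, a]
        Q = R + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V

def temporal_concatenation(P, R, H, num_segments):
    """Split the horizon into segments, solve each independently, concatenate."""
    n_states = R.shape[0]
    seg_len, rem = divmod(H, num_segments)
    lengths = [seg_len + (1 if i < rem else 0) for i in range(num_segments)]
    pieces = []
    for L in lengths:
        # Each sub-problem uses a zero terminal value; the mismatch this causes
        # at segment boundaries is what the regret bounds quantify.
        pi, _ = backward_induction(P, R, L, np.zeros(n_states))
        pieces.append(pi)
    return np.concatenate(pieces, axis=0)  # per-step policy of shape (H, n_states)

# Example usage (illustrative): a random 3-action MDP with 20 states, horizon 100.
# P = np.random.dirichlet(np.ones(20), size=(3, 20))  # P[a, s] is a distribution
# R = np.random.rand(20, 3)
# pi = temporal_concatenation(P, R, H=100, num_segments=4)
```

In this sketch the sub-problems are independent, so they can be handed to any existing finite-horizon solver and run in parallel, which is where the speed-up comes from.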
Similar Papers
Accelerated decomposition techniques for large discounted Markov decision processes
Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into some levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorith...
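As a rough illustration of the decomposition idea described in this snippet (not the authors' algorithm), the following sketch assumes a discounted tabular MDP and uses networkx for the graph machinery: it computes the strongly connected components of the transition graph and runs a restricted value iteration per component, processing components in reverse topological order so that downstream values are already available.

```python
# Illustrative SCC-based decomposition for a discounted tabular MDP.
# Assumptions: P has shape (n_actions, n_states, n_states), R has shape
# (n_states, n_actions); names are placeholders.
import numpy as np
import networkx as nx

def scc_value_iteration(P, R, gamma=0.95, iters=500):
    n_actions, n_states, _ = P.shape
    # Transition graph: an edge wherever some action can move s -> s'.
    G = nx.DiGraph()
    G.add_nodes_from(range(n_states))
    for a in range(n_actions):
        G.add_edges_from(zip(*np.nonzero(P[a] > 0)))
    cond = nx.condensation(G)                 # DAG whose nodes are the SCCs
    V = np.zeros(n_states)
    # Reverse topological order: downstream components are solved first,
    # so their values can be treated as constants by upstream components.
    for comp in reversed(list(nx.topological_sort(cond))):
        members = sorted(cond.nodes[comp]['members'])
        for _ in range(iters):                # value iteration on the restricted MDP
            for s in members:
                V[s] = max(R[s, a] + gamma * P[a, s] @ V for a in range(n_actions))
    return V
```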
Bounded Parameter Markov Decision Processes
In this paper, we introduce the notion of a bounded parameter Markov decision process as a generalization of the traditional exact MDP. A bounded parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). Bounded parameter MDPs can be used to represent variation or uncert...
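For illustration only, here is a small sketch of the pessimistic side of a bounded parameter MDP, assuming interval bounds on transition probabilities (reward bounds are omitted for brevity); the greedy mass-shifting step and all names are assumptions, not the cited paper's notation.

```python
# Bounded-parameter MDP sketch: transition probabilities are only known to
# lie in intervals [P_lo, P_hi]. The greedy step picks the worst-case
# distribution within the bounds, yielding a pessimistic (lower) bound on
# the value function. Assumes sum(p_lo) <= 1 <= sum(p_hi) for every (s, a).
import numpy as np

def worst_case_distribution(p_lo, p_hi, V):
    """Pick p in [p_lo, p_hi] with sum(p) = 1 that minimizes p @ V."""
    p = p_lo.copy()
    slack = 1.0 - p.sum()
    for s in np.argsort(V):            # put the extra mass on low-value states first
        add = min(p_hi[s] - p_lo[s], slack)
        p[s] += add
        slack -= add
        if slack <= 0:
            break
    return p

def pessimistic_value_iteration(P_lo, P_hi, R, gamma=0.95, iters=500):
    """Lower-bound value function for a discounted bounded-parameter MDP."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                p = worst_case_distribution(P_lo[a, s], P_hi[a, s], V)
                Q[s, a] = R[s, a] + gamma * p @ V
        V = Q.max(axis=1)
    return V
```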
Learning Qualitative Markov Decision Processes
To navigate in natural environments, a robot must decide the best action to take according to its current situation and goal, a problem that can be represented as a Markov Decision Process (MDP). In general, it is assumed that a reasonable state representation and transition model can be provided by the user to the system. When dealing with complex domains, however, it is not always easy or pos...
Ordinal Decision Models for Markov Decision Processes
Setting the values of rewards in Markov decision processes (MDP) may be a difficult task. In this paper, we consider two ordinal decision models for MDPs where only an order is known over rewards. The first one, which has been proposed recently in MDPs [23], defines preferences with respect to a reference point. The second model, which can be viewed as the dual approach of the first one, is b...
Bounds for Markov Decision Processes
We consider the problem of producing lower bounds on the optimal cost-to-go function of a Markov decision problem. We present two approaches to this problem: one based on the methodology of approximate linear programming (ALP) and another based on the so-called martingale duality approach. We show that these two approaches are intimately connected. Exploring this connection leads us to the prob...
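The ALP approach mentioned in this snippet can be sketched as follows for a discounted cost-minimization MDP (a hedged illustration using scipy's linprog; the symbols Phi, g, alpha, and c are generic placeholders rather than the paper's notation): any weight vector feasible for the constraints yields a pointwise lower bound on the optimal cost-to-go.

```python
# Approximate linear programming (ALP) lower bound for a discounted
# cost-minimization MDP. Any feasible r satisfies Phi @ r <= J* pointwise,
# so the maximized objective c' Phi r lower-bounds the weighted optimal cost.
import numpy as np
from scipy.optimize import linprog

def alp_lower_bound(P, g, Phi, alpha, c):
    """P: (n_actions, n_states, n_states) transition kernels,
    g: (n_states, n_actions) stage costs, Phi: (n_states, k) basis functions,
    alpha: discount factor, c: (n_states,) state-relevance weights."""
    n_actions, n_states, _ = P.shape
    # One constraint per (s, a):  (Phi r)(s) - alpha * E[(Phi r)(s') | s, a] <= g(s, a)
    rows, rhs = [], []
    for a in range(n_actions):
        rows.append(Phi - alpha * P[a] @ Phi)   # shape (n_states, k)
        rhs.append(g[:, a])
    A_ub = np.vstack(rows)
    b_ub = np.concatenate(rhs)
    # Maximize c' Phi r  <=>  minimize -(Phi' c)' r; weights unrestricted in sign.
    res = linprog(-(Phi.T @ c), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * Phi.shape[1])
    return Phi @ res.x   # pointwise lower bound on the optimal cost-to-go
```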
Journal
Journal title: Probability in the Engineering and Informational Sciences
Year: 2021
ISSN: 1469-8951, 0269-9648
DOI: https://doi.org/10.1017/s0269964821000206